Fix 21 security, correctness, and code quality issues (#44)
Conversation
Priority fixes (security/critical):
- Add a secureZero utility with volatile writes; zero private keys, HMAC intermediates, seeds, and nonces in the secp256k1, signer, wallet, hd_wallet, and mnemonic modules
- Add Signer.deinit() and Wallet.deinit() for explicit key cleanup
- Fix errdefer in abi_decode to track the decoded count (prevents use-after-free on partial decode failure)
- Fix hd_wallet.toAddress to return an error instead of a zero address
- Add a bounds check for >32 tuple values in abi_encode (stack overflow)
- Replace `@panic` with optional returns in dex/v2 getAmountOut/getAmountIn
- Replace the static mutable buffer in eip712 with an allocator-based approach (fixes thread safety and reentrancy)
- Fix provider.zig errdefer blocks to free inner allocations (data/topics)
- Replace custom string helpers in ws_transport and subscription with std.mem equivalents (~70 lines of duplicate code removed)
- Add missing modules to unit_tests.zig (13 modules were untested)
- Use `@compileError` for unsupported RLP types instead of silently encoding zero

Medium fixes (correctness/quality):
- abi_json: return null for unknown types instead of silently falling back to uint256
- multicall: use error.InvalidAbiData instead of error.OutOfMemory for malformed data
- eip155: recoverFromEip155V returns ?u8 (null on invalid input, not an ambiguous 0)
- units: parseEther/parseGwei return ?u256 (null on negative or non-finite input)
- keccak_xkcp: replace debug.assert with `@panic` on C FFI failures (asserts are elided in release builds)
- abi_encode: remove ~50 lines of dead allocating code path; remove the unused allocator param from encodeDynamicValueInto
- transaction: widen the v parameter from u8 to u256 for EIP-155 support with chain_id > 110
- erc20/erc721: use freeReturnValues instead of allocator.free to prevent inner string memory leaks
- secp256k1_c: fix a TOCTOU race in global context init with cmpxchg
- chain: make addressFromHex comptime-only (compile error on bad input instead of runtime unreachable)
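The secureZero utility mentioned above relies on volatile writes so the optimizer cannot elide the wipe as a dead store. The PR's implementation is in Zig and not shown here; the following C sketch (our own name and signature, not the project's) illustrates the same technique.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical C rendition of the secureZero idea: writing through a
 * volatile pointer forces the compiler to emit every store, even though
 * the buffer is never read again afterwards. */
static void secure_zero(void *buf, size_t len) {
    volatile uint8_t *p = (volatile uint8_t *)buf;
    while (len--) {
        *p++ = 0;
    }
}
```

A plain `memset` before a `free` or scope exit may be removed as dead code; the volatile qualifier is what makes the wipe reliable.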
📝 Walkthrough

Multiple modules adopt optional return types and tighter error propagation, ABI encoding is refactored to non-alloc paths with a tuple-size cap, decoding cleanup tracks decoded_count, secure zeroization and deinit hooks are added for secrets, and custom string helpers are replaced by std.mem utilities.
Sequence diagrams: skipped — the changes are broad and do not introduce a single new multi-component sequential control flow suitable for a compact diagram.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 checks passed
Actionable comments posted: 7
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
src/ws_transport.zig (3)
Lines 1-1164: ⚠️ Potential issue | 🟡 Minor

`zig fmt --check` is still failing on this file. CI is blocked until `src/ws_transport.zig` is reformatted with `zig fmt`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ws_transport.zig` around lines 1 - 1164, The file fails zig fmt checks; run the formatter and commit the changes so CI passes—specifically run `zig fmt src/ws_transport.zig` (or `zig fmt --write`/IDE formatter) to reformat the entire file including top-level declarations like parseUrl, encodeFrame, decodeFrameHeader, buildHandshakeRequest, computeAcceptKey, WsTransport and its methods (connect, initTls, performHandshake, sendFrame, readFrame, writeAll, readSome). Ensure no manual code changes other than whitespace/formatting are made and re-run `zig fmt --check` to verify.
Lines 44-59: ⚠️ Potential issue | 🟠 Major

Handle valid WebSocket URLs with IPv6 and query-only targets.

This authority split regresses on valid URLs. `std.mem.indexOfScalar(u8, host_port, ':')` breaks `ws://[::1]:8545/`, and splitting the path only on `'/'` treats `ws://host?token=abc` as part of the host instead of preserving `/?token=abc`.
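The bracket-aware split this comment asks for can be sketched as follows (C, with hypothetical names — the actual fix belongs in the Zig parseUrl): treat a leading `[` as an IPv6 literal and only look for the port separator after the closing `]`; for plain hosts, take the last colon.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative authority splitter. On success, *host/*host_len name the host
 * and *port points at the port digits (or NULL if absent). Returns -1 on an
 * unterminated IPv6 literal. */
static int split_authority(const char *authority,
                           const char **host, size_t *host_len,
                           const char **port) {
    const char *colon;
    if (authority[0] == '[') {
        const char *close = strchr(authority, ']');
        if (!close) return -1; /* "[::1" with no closing bracket */
        *host = authority + 1;
        *host_len = (size_t)(close - authority - 1);
        colon = (close[1] == ':') ? close + 1 : NULL;
    } else {
        colon = strrchr(authority, ':');
        *host = authority;
        *host_len = colon ? (size_t)(colon - authority) : strlen(authority);
    }
    *port = colon ? colon + 1 : NULL;
    return 0;
}
```

With this shape, `[::1]:8545` yields host `::1` and port `8545` instead of mis-splitting on the first colon inside the IPv6 address.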
Lines 442-448: ⚠️ Potential issue | 🟠 Major

Substring ID matching can return the wrong JSON-RPC response.

Matching on `"id":{d}` is not exact: waiting for `id = 1` will also match `"id":12`, and valid JSON like `"id": 1` is missed. This can either return another request's response or keep looping forever depending on formatting. Parse the envelope and compare the `id` field exactly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ws_transport.zig` around lines 442 - 448, The current substring match using id_buf/id_str and std.mem.indexOf against frame_data can produce false positives (e.g. id=1 matching "id":12) or miss spaced JSON; instead parse the incoming envelope and compare the parsed "id" field for exact numeric equality to the expected id. Replace the std.mem.indexOf check with a JSON parse of frame_data (using Zig's std.json parser or equivalent), extract the top-level "id" value and compare it numerically to the expected id, returning frame_data only on an exact match and handling parse errors/absent id appropriately.

src/mnemonic.zig (1)
Lines 175-199: ⚠️ Potential issue | 🟠 Major

Guard the fixed-size buffers before copying.

`toSeed` still writes into `mnemonic_buf` and `salt_buf` without validating capacity first. A passphrase longer than 248 bytes already trips this, and oversized word slices can do the same for `mnemonic_buf`.

Suggested fix
```diff
 pub fn toSeed(words: []const []const u8, passphrase: []const u8) ![64]u8 {
     const secureZero = @import("utils/constants.zig").secureZero;
     // Build mnemonic string: words joined by spaces
     var mnemonic_buf: [1024]u8 = undefined;
     defer secureZero(&mnemonic_buf);
     var mnemonic_len: usize = 0;
     for (words, 0..) |word, i| {
+        if (i > 0 and mnemonic_len == mnemonic_buf.len) return error.InputTooLong;
+        const available = mnemonic_buf.len - mnemonic_len - @as(usize, @intFromBool(i > 0));
+        if (word.len > available) return error.InputTooLong;
         if (i > 0) {
             mnemonic_buf[mnemonic_len] = ' ';
             mnemonic_len += 1;
         }
         @memcpy(mnemonic_buf[mnemonic_len .. mnemonic_len + word.len], word);
@@
     var salt_buf: [256]u8 = undefined;
     defer secureZero(&salt_buf);
     const prefix = "mnemonic";
+    if (passphrase.len > salt_buf.len - prefix.len) return error.InputTooLong;
     @memcpy(salt_buf[0..prefix.len], prefix);
     @memcpy(salt_buf[prefix.len .. prefix.len + passphrase.len], passphrase);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mnemonic.zig` around lines 175 - 199, The code builds fixed-size buffers mnemonic_buf and salt_buf without bounds checks and can overflow when words or passphrase are too large; update the toSeed logic to validate sizes before copying: check each word.len will fit into mnemonic_buf (use mnemonic_len + word.len <= mnemonic_buf.len) and ensure the passphrase length plus prefix fits salt_buf (prefix.len + passphrase.len <= salt_buf.len); if a check fails, return an appropriate error (or slice an error result) instead of writing, and keep the existing defer secureZero calls; reference mnemonic_buf, mnemonic_len, salt_buf, prefix, words, and passphrase when applying these checks.

src/secp256k1_c.zig (1)
Lines 32-33: ⚠️ Potential issue | 🟡 Minor

Declare and call `secp256k1_context_destroy()` on the race-losing context instead of leaking it.

The libsecp256k1 library provides `secp256k1_context_destroy()` (defined in `secp256k1.h`), so the losing `ctx` can be properly freed rather than intentionally leaked. The comment claiming there is no destroy function in the API is incorrect.

Suggested fix
```diff
 extern fn secp256k1_context_create(flags: c_uint) ?*secp256k1_context;
+extern fn secp256k1_context_destroy(ctx: *secp256k1_context) void;
 extern fn secp256k1_context_randomize(ctx: *secp256k1_context, seed32: ?[*]const u8) c_int;
@@
     // Use cmpxchg to avoid TOCTOU race: if another thread won, use their context
     if (@cmpxchgStrong(?*secp256k1_context, &global_ctx, null, ctx, .release, .acquire)) |existing| {
-        // Another thread already initialized; we leak our context (no destroy in libsecp256k1 static API),
-        // but this is harmless and only happens once.
         _ = existing;
+        secp256k1_context_destroy(ctx);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/secp256k1_c.zig` around lines 32 - 33, The code currently omits and therefore never calls the libsecp256k1 cleanup function; add an extern declaration for secp256k1_context_destroy(ctx: *secp256k1_context) void alongside the existing externs (secp256k1_context_create and secp256k1_context_randomize) and replace the intentional leak by calling secp256k1_context_destroy on the race-losing ctx instead of leaving it allocated; ensure the winning ctx remains in use and only the losing ctx is passed to secp256k1_context_destroy.

src/secp256k1.zig (1)
Lines 277-337: ⚠️ Potential issue | 🟠 Major

Zero the `HmacSha256` work buffers too.

`v` and `k` are scrubbed now, but every `hmac_*` local in this function still keeps key-derived state on the stack until the frame unwinds. HmacSha256 structs contain both the outer key pad and buffered SHA256 state—both sensitive. This leaves the RFC6979 cleanup incomplete.

Suggested pattern
```diff
-    var hmac_d = HmacSha256.init(&k);
-    hmac_d.update(&v);
-    hmac_d.update(&[_]u8{0x00});
-    hmac_d.update(&private_key);
-    hmac_d.update(&message_hash);
-    hmac_d.final(&k);
+    {
+        var hmac_d = HmacSha256.init(&k);
+        defer secureZero(std.mem.asBytes(&hmac_d)[0..]);
+        hmac_d.update(&v);
+        hmac_d.update(&[_]u8{0x00});
+        hmac_d.update(&private_key);
+        hmac_d.update(&message_hash);
+        hmac_d.final(&k);
+    }
```

Apply the same wrapper to `hmac_e`, `hmac_f`, `hmac_g`, `hmac_h2`, `hmac_k2`, and `hmac_v2`. (Note: slice the `asBytes` result with `[0..]` to match the `secureZero` signature.)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/secp256k1.zig` around lines 277 - 337, The HmacSha256 local instances (hmac_d, hmac_e, hmac_f, hmac_g, hmac_h2, hmac_k2, hmac_v2) retain key material on the stack and must be zeroed after use; for each HmacSha256 variable in this function, call secureZero on its byte view (use hmacX.asBytes()[0..] to match secureZero's parameter) via defer immediately after initialization so their internal buffers are scrubbed when the scope exits.

src/abi_decode.zig (1)
Lines 88-95: ⚠️ Potential issue | 🔴 Critical

Empty dynamic arrays are still treated as owned allocations.

`decoded_count` fixes the uninitialized-prefix cleanup, but a decoded empty array still lands in `result[0..decoded_count]`. `decodeDynamicArray()` returns `.array = &.{}` for `length == 0`, while `freeValue(.array)` always calls `allocator.free(items)`, so a valid empty array will still trigger an invalid free during cleanup or `freeValues()`.

Suggested fix
```diff
 fn decodeDynamicArray(data: []const u8, offset: usize, element_type: AbiType, allocator: std.mem.Allocator) DecodeError!AbiValue {
     if (offset + 32 > data.len) return error.OffsetOutOfBounds;
     const length = readUint256AsUsize(data[offset..][0..32]);
     const elements_start = offset + 32;
     if (length == 0) {
-        const empty: []const AbiValue = &.{};
-        return .{ .array = empty };
+        return .{ .array = try allocator.alloc(AbiValue, 0) };
     }
```

Also applies to: 110-110
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/abi_decode.zig` around lines 88 - 95, The cleanup/free path incorrectly frees static empty-array slices returned by decodeDynamicArray() (the &.{ } case), causing invalid frees during the errdefer or freeValues(); update the logic so empty arrays are not treated as owned allocations: either change decodeDynamicArray() to return a null/owned-empty pointer for length == 0, or (simpler) modify freeValue() / freeValues() to skip allocator.free(items) when the slice length is 0 (check the value's length before calling allocator.free) so result + decoded_count cleanup won't free the static &.{ } slice. Ensure you reference and update decodeDynamicArray(), freeValue(), freeValues(), and the errdefer cleanup that iterates result[0..decoded_count].

src/multicall.zig (1)
Lines 169-186: ⚠️ Potential issue | 🔴 Critical

Only clean up the initialized prefix of `results`.

`results` is uninitialized after `allocator.alloc`. Any `return error.InvalidAbiData` inside the loop before `results[i]` is assigned makes the current `errdefer` read garbage `return_data` slices from the remaining slots and free them.

Suggested fix
```diff
 var results = try allocator.alloc(Result, array_len);
+var parsed_count: usize = 0;
 errdefer {
-    for (results) |r| {
+    for (results[0..parsed_count]) |r| {
         if (r.return_data.len > 0) allocator.free(r.return_data);
     }
     allocator.free(results);
 }
@@
     results[i] = .{
         .success = success_word != 0,
         .return_data = return_data,
     };
+    parsed_count += 1;
 }
```

Also applies to: 189-222
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/multicall.zig` around lines 169 - 186, The errdefer in decodeAggregate3Results currently assumes every element of the allocated results slice is initialized and frees their return_data, which can read uninitialized memory; fix by tracking how many entries are successfully initialized (e.g., a var initialized: usize = 0) and increment it immediately after assigning results[i] (or after allocating/setting results[i].return_data), then change the errdefer to only iterate results[0..initialized] when freeing return_data and finally free the results buffer; apply the same initialized-count pattern to the other similar function(s) mentioned (the code around the second block at lines ~189-222) to avoid freeing uninitialized slots.

src/abi_json.zig (1)
Lines 217-246: ⚠️ Potential issue | 🟠 Major

`parseType` still accepts malformed ABI type strings.

`uint257`, `int999`, and `bytes33` still collapse to valid scalar types because these branches use `orelse` defaults, and `uint257[]`/`custom_type[2]` are accepted even earlier by the array fast-path. That means the new nullable API still won't surface `error.UnknownType` for several invalid ABI strings.

Suggested fix
```diff
 pub fn parseType(type_str: []const u8) ?AbiType {
     // Handle array suffixes
-    if (std.mem.endsWith(u8, type_str, "[]")) return .dynamic_array;
+    if (std.mem.endsWith(u8, type_str, "[]")) {
+        _ = parseType(type_str[0 .. type_str.len - 2]) orelse return null;
+        return .dynamic_array;
+    }
     // Check for fixed array (e.g., "uint256[3]")
-    if (type_str.len > 0 and type_str[type_str.len - 1] == ']') return .fixed_array;
+    if (type_str.len > 0 and type_str[type_str.len - 1] == ']') {
+        const open = std.mem.lastIndexOfScalar(u8, type_str, '[') orelse return null;
+        _ = std.fmt.parseInt(usize, type_str[open + 1 .. type_str.len - 1], 10) catch return null;
+        _ = parseType(type_str[0..open]) orelse return null;
+        return .fixed_array;
+    }
@@
     // uint types
     if (std.mem.startsWith(u8, type_str, "uint")) {
-        return parseUintType(type_str) orelse .uint256;
+        return if (type_str.len == 4) .uint256 else parseUintType(type_str);
     }
     // int types
     if (std.mem.startsWith(u8, type_str, "int")) {
-        return parseIntType(type_str) orelse .int256;
+        return if (type_str.len == 3) .int256 else parseIntType(type_str);
     }
     // bytesN types
     if (std.mem.startsWith(u8, type_str, "bytes")) {
-        return parseBytesType(type_str) orelse .bytes;
+        return if (type_str.len == 5) .bytes else parseBytesType(type_str);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/abi_json.zig` around lines 217 - 246, parseType currently accepts malformed ABI strings because it defaults to .uint256/.int256/.bytes on parse failures and treats any trailing "[]" or "]" as arrays without validating the element type; update parseType so array fast-paths validate the inner type (for "[]": strip the "[]" and recursively call parseType on the base and return null if that fails; for fixed arrays: locate the '[' of the last bracket, validate the base portion with parseType and ensure the bracket content is a non-empty numeric literal, returning null on failure), and stop using orelse defaults—call parseUintType(type_str), parseIntType(type_str), and parseBytesType(type_str) and propagate null if they return null instead of falling back to default scalar kinds.
🧹 Nitpick comments (1)
src/transaction.zig (1)
Lines 630-653: Consider adding a test for large chain IDs to validate the `u256` fix.

The existing tests use small `v` values (37, 1, 0) that fit in `u8`. Since the PR specifically fixes support for large chain IDs, adding a test with `chain_id > 110` (e.g., `chain_id = 1_000_000`) would verify the fix works end-to-end.

Example test case for large chain ID
```zig
test "signed legacy tx with large chain ID" {
    const allocator = std.testing.allocator;
    const chain_id: u64 = 1_000_000;
    const tx = Transaction{ .legacy = .{
        .nonce = 0,
        .gas_price = 20_000_000_000,
        .gas_limit = 21000,
        .to = [_]u8{0x35} ** 20,
        .value = 0,
        .data = &.{},
        .chain_id = chain_id,
    } };
    const r = [_]u8{0x01} ** 32;
    const s = [_]u8{0x02} ** 32;
    // v = chain_id * 2 + 35 + recovery_id = 2_000_035 (exceeds u8)
    const v: u256 = chain_id * 2 + 35;
    const signed = try serializeSigned(allocator, tx, r, s, v);
    defer allocator.free(signed);
    try std.testing.expect(signed[0] >= 0xc0);
    try std.testing.expect(signed.len > 60);
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/transaction.zig` around lines 630 - 653, Add a test that exercises large chain IDs to validate the u256 fix: create a Transaction with a large chain_id (e.g., 1_000_000) in the legacy branch, compute v as a u256 value (v = chain_id * 2 + 35 [+ recovery_id if needed]), call serializeSigned(allocator, tx, r, s, v) and assert the result is RLP-encoded (e.g., signed[0] >= 0xc0) and non-trivial length; reference the Transaction struct, serializeSigned function, and use u256 for v so the test verifies handling of values > u8/110 end-to-end.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/multicall.zig`:
- Around line 173-175: The bounds checks that add constants to offsets (e.g.,
array_offset + 32 > data.len) are unsafe because readWord can saturate to
maxInt(usize); change them to overflow-safe comparisons: after reading
array_offset with readWord, first validate it is not maxInt(usize) (or otherwise
detect saturation) and then check bounds by comparing with data.len using
subtraction (e.g., require data.len >= 32 and array_offset <= data.len - 32) or
use a checked-add helper that returns an error on overflow; apply the same
pattern to the other checks mentioned (tuple_start + 64, tuple_start + 32, and
return_data_start + return_data_len) so each uses either checked addition or a
data.len >= constant && offset <= data.len - constant style check and returns
error.InvalidAbiData on failure.
In `@src/provider.zig`:
- Around line 652-658: The errdefer cleanup currently frees log.data only when
log.data.len > 0, which misses zero-length allocations (allocator.alloc(u8, 0))
and leaks; change the cleanup loops that iterate over logs to unconditionally
call allocator.free(log.data) (and still free log.topics conditionally if
needed) using allocator.free(log.data) for every log. Update all three cleanup
sites (the errdefer blocks that free logs/log.data/log.topics—the one shown plus
the other two at the same pattern) so they always free log.data while keeping
the existing allocator.free(logs) and allocator.free(log.topics) logic as
appropriate.
- Around line 699-712: The current loop lets parseSingleLog allocate into
temporaries so if it fails mid-way those allocations aren't visible to the
errdefer; fix by making the destination visible before parseSingleLog runs:
increment parsed_count (or otherwise mark logs[i] as in-use) and call a variant
of parseSingleLog that writes directly into logs[i] (e.g.,
parseSingleLogAlloc(allocator, &logs[i], item.object) or pass &logs[i] as the
target), then only assign or finalize as needed; apply the same pattern to the
analogous block around lines 797-810 so partial allocations are always tracked
by the errdefer cleanup.
In `@src/signer.zig`:
- Around line 19-22: The methods on Signer (address, signHash, signMessage) use
value receivers (self: Signer) causing the 32-byte private_key to be copied and
left unscrubbable; change their receivers to pointer receivers (self: *Signer)
so callers operate on the original instance and no stack copies of private_key
are produced, keep deinit as pub fn deinit(self: *Signer) void and update all
internal and external call sites to pass a pointer (or use &signer) and adjust
any function signatures that forward these methods to use *Signer as well so the
private key is only ever handled by-reference.
In `@src/utils/units.zig`:
- Around line 18-21: f64ToU256 currently only rejects negative and non-finite
inputs but can still overflow when converting very large positive f64 to u128;
add an explicit upper-bound check before performing `@intFromFloat`/`@as(u128)` so
the conversion cannot exceed u128 range and cause UB. Concretely, compute the
maximum finite f64 that safely represents u128 max (or convert
`@intToFloat(u128.max)` to f64) and if value is greater than that return null (or
an error), otherwise proceed with `@intFromFloat` and the `@as(u128)`/`@as(u256)`
casts; ensure the check references f64ToU256, u128, u256 and `@intFromFloat` so
it's easy to locate and follow.
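The guarded conversion suggested for units can be illustrated in C (hypothetical names; the real fix is f64ToU256 in Zig and would bound against u128/u256 rather than u64): reject NaN, negatives, and anything at or above the target type's range before casting, since an out-of-range float-to-int cast is undefined behavior in both languages.

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Illustrative guarded double -> uint64_t conversion.
 * 2^64 (0x1p64) is exactly representable as a double, so v < 0x1p64 is a
 * safe upper bound. NaN fails v >= 0.0, and +infinity fails v < 0x1p64,
 * so no separate isfinite() check is needed. */
static int f64_to_u64(double v, uint64_t *out) {
    if (!(v >= 0.0) || !(v < 0x1p64)) return 0;
    *out = (uint64_t)v;
    return 1;
}
```

The same shape transfers to Zig: validate the range first, then perform the `@intFromFloat` cast only on values the target integer type can hold.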
In `@src/ws_transport.zig`:
- Around line 254-256: The current validateHandshakeResponse function
incorrectly searches the whole response for "101"; update
validateHandshakeResponse to inspect only the HTTP status line by locating the
first CRLF in response, taking the slice before that as the status line, and
comparing it exactly (or via startsWith) against "HTTP/1.1 101" and "HTTP/1.0
101" instead of using std.mem.indexOf on "101"; keep the subsequent
Sec-WebSocket-Accept check using expected_accept intact and return false if the
status line does not match the allowed values.
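As a sketch of the status-line check described above (C, illustrative names — the real validateHandshakeResponse is Zig): slice off everything after the first CRLF and match the allowed prefixes exactly, so a stray "101" elsewhere in the response cannot pass the handshake.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative check that only inspects the HTTP status line. The character
 * after the matched prefix must be a space or end-of-line so that a status
 * like "1010" is not accepted. */
static int is_switching_protocols(const char *response) {
    const char *crlf = strstr(response, "\r\n");
    size_t line_len = crlf ? (size_t)(crlf - response) : strlen(response);
    static const char *allowed[] = { "HTTP/1.1 101", "HTTP/1.0 101" };
    for (size_t i = 0; i < 2; i++) {
        size_t n = strlen(allowed[i]);
        if (line_len >= n && strncmp(response, allowed[i], n) == 0 &&
            (line_len == n || response[n] == ' '))
            return 1;
    }
    return 0;
}
```

The Sec-WebSocket-Accept comparison would then run as a separate step, as the comment above describes.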
---
Outside diff comments:
In `@src/abi_decode.zig`:
- Around line 88-95: The cleanup/free path incorrectly frees static empty-array
slices returned by decodeDynamicArray() (the &.{ } case), causing invalid frees
during the errdefer or freeValues(); update the logic so empty arrays are not
treated as owned allocations: either change decodeDynamicArray() to return a
null/owned-empty pointer for length == 0, or (simpler) modify freeValue() /
freeValues() to skip allocator.free(items) when the slice length is 0 (check the
value's length before calling allocator.free) so result + decoded_count cleanup
won't free the static &.{ } slice. Ensure you reference and update
decodeDynamicArray(), freeValue(), freeValues(), and the errdefer cleanup that
iterates result[0..decoded_count].
In `@src/abi_json.zig`:
- Around line 217-246: parseType currently accepts malformed ABI strings because
it defaults to .uint256/.int256/.bytes on parse failures and treats any trailing
"[]" or "]" as arrays without validating the element type; update parseType so
array fast-paths validate the inner type (for "[]": strip the "[]" and
recursively call parseType on the base and return null if that fails; for fixed
arrays: locate the '[' of the last bracket, validate the base portion with
parseType and ensure the bracket content is a non-empty numeric literal,
returning null on failure), and stop using orelse defaults—call
parseUintType(type_str), parseIntType(type_str), and parseBytesType(type_str)
and propagate null if they return null instead of falling back to default scalar
kinds.
In `@src/mnemonic.zig`:
- Around line 175-199: The code builds fixed-size buffers mnemonic_buf and
salt_buf without bounds checks and can overflow when words or passphrase are too
large; update the toSeed logic to validate sizes before copying: check each
word.len will fit into mnemonic_buf (use mnemonic_len + word.len <=
mnemonic_buf.len) and ensure the passphrase length plus prefix fits salt_buf
(prefix.len + passphrase.len <= salt_buf.len); if a check fails, return an
appropriate error (or slice an error result) instead of writing, and keep the
existing defer secureZero calls; reference mnemonic_buf, mnemonic_len, salt_buf,
prefix, words, and passphrase when applying these checks.
In `@src/multicall.zig`:
- Around line 169-186: The errdefer in decodeAggregate3Results currently assumes
every element of the allocated results slice is initialized and frees their
return_data, which can read uninitialized memory; fix by tracking how many
entries are successfully initialized (e.g., a var initialized: usize = 0) and
increment it immediately after assigning results[i] (or after allocating/setting
results[i].return_data), then change the errdefer to only iterate
results[0..initialized] when freeing return_data and finally free the results
buffer; apply the same initialized-count pattern to the other similar
function(s) mentioned (the code around the second block at lines ~189-222) to
avoid freeing uninitialized slots.
In `@src/secp256k1_c.zig`:
- Around line 32-33: The code currently omits and therefore never calls the
libsecp256k1 cleanup function; add an extern declaration for
secp256k1_context_destroy(ctx: *secp256k1_context) void alongside the existing
externs (secp256k1_context_create and secp256k1_context_randomize) and replace
the intentional leak by calling secp256k1_context_destroy on the race-losing ctx
instead of leaving it allocated; ensure the winning ctx remains in use and only
the losing ctx is passed to secp256k1_context_destroy.
In `@src/secp256k1.zig`:
- Around line 277-337: The HmacSha256 local instances (hmac_d, hmac_e, hmac_f,
hmac_g, hmac_h2, hmac_k2, hmac_v2) retain key material on the stack and must be
zeroed after use; for each HmacSha256 variable in this function, call secureZero
on its byte view (use hmacX.asBytes()[0..] to match secureZero's parameter) via
defer immediately after initialization so their internal buffers are scrubbed
when the scope exits.
In `@src/ws_transport.zig`:
- Around line 1-1164: The file fails zig fmt checks; run the formatter and
commit the changes so CI passes—specifically run `zig fmt src/ws_transport.zig`
(or `zig fmt --write`/IDE formatter) to reformat the entire file including
top-level declarations like parseUrl, encodeFrame, decodeFrameHeader,
buildHandshakeRequest, computeAcceptKey, WsTransport and its methods (connect,
initTls, performHandshake, sendFrame, readFrame, writeAll, readSome). Ensure no
manual code changes other than whitespace/formatting are made and re-run `zig
fmt --check` to verify.
- Around line 442-448: The current substring match using id_buf/id_str and
std.mem.indexOf against frame_data can produce false positives (e.g. id=1
matching "id":12) or miss spaced JSON; instead parse the incoming envelope and
compare the parsed "id" field for exact numeric equality to the expected id.
Replace the std.mem.indexOf check with a JSON parse of frame_data (using Zig's
std.json parser or equivalent), extract the top-level "id" value and compare it
numerically to the expected id, returning frame_data only on an exact match and
handling parse errors/absent id appropriately.
---
Nitpick comments:
In `@src/transaction.zig`:
- Around line 630-653: Add a test that exercises large chain IDs to validate the
u256 fix: create a Transaction with a large chain_id (e.g., 1_000_000) in the
legacy branch, compute v as a u256 value (v = chain_id * 2 + 35 [+ recovery_id
if needed]), call serializeSigned(allocator, tx, r, s, v) and assert the result
is RLP-encoded (e.g., signed[0] >= 0xc0) and non-trivial length; reference the
Transaction struct, serializeSigned function, and use u256 for v so the test
verifies handling of values > u8/110 end-to-end.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1e41afd3-1bdd-417c-8cc6-50a584102221
📒 Files selected for processing (27)
src/abi_decode.zig
src/abi_encode.zig
src/abi_json.zig
src/chains/chain.zig
src/dex/router.zig
src/dex/v2.zig
src/eip155.zig
src/eip712.zig
src/erc20.zig
src/erc721.zig
src/hd_wallet.zig
src/keccak_xkcp.zig
src/mnemonic.zig
src/multicall.zig
src/provider.zig
src/rlp.zig
src/secp256k1.zig
src/secp256k1_c.zig
src/signer.zig
src/subscription.zig
src/transaction.zig
src/uint256.zig
src/utils/constants.zig
src/utils/units.zig
src/wallet.zig
src/ws_transport.zig
tests/unit_tests.zig
```zig
pub fn encodeValues(allocator: std.mem.Allocator, values: []const AbiValue) EncodeError![]u8 {
    if (values.len > max_tuple_values) return error.TooManyValues;
    const total = calcEncodedSize(values);
```
The 32-value limit has to recurse into nested collections.
Lines 65 and 76 only cap the outer argument list. A single top-level .array, .tuple, or .fixed_array with 33 children still reaches the fixed-size offsets buffers at Lines 140, 250, and 306, so the encoder will fail inside the new stack-based path instead of returning error.TooManyValues.
Suggested fix
```diff
+fn validateValueArity(values: []const AbiValue) EncodeError!void {
+    if (values.len > max_tuple_values) return error.TooManyValues;
+    for (values) |value| switch (value) {
+        .array, .fixed_array, .tuple => |items| try validateValueArity(items),
+        else => {},
+    };
+}
+
 pub fn encodeValues(allocator: std.mem.Allocator, values: []const AbiValue) EncodeError![]u8 {
-    if (values.len > max_tuple_values) return error.TooManyValues;
+    try validateValueArity(values);
     const total = calcEncodedSize(values);
     const buf = try allocator.alloc(u8, total);
@@
 pub fn encodeFunctionCall(allocator: std.mem.Allocator, selector: [4]u8, values: []const AbiValue) EncodeError![]u8 {
-    if (values.len > max_tuple_values) return error.TooManyValues;
+    try validateValueArity(values);
     const total = 4 + calcEncodedSize(values);
     const buf = try allocator.alloc(u8, total);
```

Also applies to: 75-77, 139-140, 249-250, 305-306
```diff
 const array_offset = readWord(data[0..32]);
-if (array_offset + 32 > data.len) return error.OutOfMemory;
+if (array_offset + 32 > data.len) return error.InvalidAbiData;
```
Make the offset bounds checks overflow-safe.
readWord() saturates oversized words to maxInt(usize). With that input, expressions like array_offset + 32, tuple_start + 64, and return_data_start + return_data_len overflow before the comparison, so malformed RPC data can abort here instead of returning InvalidAbiData.
Suggested fix
```diff
-if (array_offset + 32 > data.len) return error.InvalidAbiData;
+const array_end = std.math.add(usize, array_offset, 32) catch return error.InvalidAbiData;
+if (array_end > data.len) return error.InvalidAbiData;
@@
-const offset_pos = array_data_start + i * 32;
-if (offset_pos + 32 > data.len) return error.InvalidAbiData;
+const offset_pos = std.math.add(
+    usize,
+    array_data_start,
+    std.math.mul(usize, i, 32) catch return error.InvalidAbiData,
+) catch return error.InvalidAbiData;
+const offset_end = std.math.add(usize, offset_pos, 32) catch return error.InvalidAbiData;
+if (offset_end > data.len) return error.InvalidAbiData;
```

Apply the same pattern to the later `+ 64`, `+ 32`, and `+ return_data_len` checks.
Also applies to: 190-209
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/multicall.zig` around lines 173 - 175, The bounds checks that add
constants to offsets (e.g., array_offset + 32 > data.len) are unsafe because
readWord can saturate to maxInt(usize); change them to overflow-safe
comparisons: after reading array_offset with readWord, first validate it is not
maxInt(usize) (or otherwise detect saturation) and then check bounds by
comparing with data.len using subtraction (e.g., require data.len >= 32 and
array_offset <= data.len - 32) or use a checked-add helper that returns an error
on overflow; apply the same pattern to the other checks mentioned (tuple_start +
64, tuple_start + 32, and return_data_start + return_data_len) so each uses
either checked addition or a data.len >= constant && offset <= data.len -
constant style check and returns error.InvalidAbiData on failure.
  const logs = try allocator.alloc(receipt_mod.Log, arr.items.len);
- errdefer allocator.free(logs);
+ var parsed_count: usize = 0;
+ errdefer {
+     for (logs[0..parsed_count]) |log| {
+         if (log.data.len > 0) allocator.free(log.data);
+         if (log.topics.len > 0) allocator.free(log.topics);
+     }
+     allocator.free(logs);
+ }

  for (arr.items, 0..) |item, i| {
      if (item != .object) return error.InvalidResponse;
      logs[i] = try parseSingleLog(allocator, item.object);
+     parsed_count += 1;
parsed_count still misses leaks from the current log.
parsed_count only covers entries already stored in logs[i]. parseSingleLog() allocates data before parsing topics and the remaining fields, so if it fails midway the current log is never visible to these errdefers and its allocations leak.
Suggested fix
fn parseSingleLog(allocator: std.mem.Allocator, obj: std.json.ObjectMap) !receipt_mod.Log {
const address = (try parseOptionalAddress(jsonGetString(obj, "address"))) orelse return error.InvalidResponse;
const data_str = jsonGetString(obj, "data") orelse "0x";
const data = try parseHexBytes(allocator, data_str);
+ errdefer allocator.free(data);
// Parse topics array
const topics = try parseTopics(allocator, obj);
+ errdefer if (topics.len > 0) allocator.free(topics);

Also applies to: 797-810
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/provider.zig` around lines 699 - 712, The current loop lets
parseSingleLog allocate into temporaries so if it fails mid-way those
allocations aren't visible to the errdefer; fix by making the destination
visible before parseSingleLog runs: increment parsed_count (or otherwise mark
logs[i] as in-use) and call a variant of parseSingleLog that writes directly
into logs[i] (e.g., parseSingleLogAlloc(allocator, &logs[i], item.object) or
pass &logs[i] as the target), then only assign or finalize as needed; apply the
same pattern to the analogous block around lines 797-810 so partial allocations
are always tracked by the errdefer cleanup.
/// Securely zero the private key. Call when the Signer is no longer needed.
pub fn deinit(self: *Signer) void {
    secureZero(&self.private_key);
}
🏁 Scripts were run against src/signer.zig and src/wallet.zig (repository: StrobeLabs/eth.zig) to confirm the method signatures.
Change Signer method receivers from by-value to pointer to avoid unscrubbable copies of the private key.
address(), signHash(), and signMessage() currently take self: Signer (by value), which copies the entire 32-byte private key on each call. The new deinit() only scrubs the original instance, leaving these copies unscrubbable in stack memory.
Suggested fix
- pub fn address(self: Signer) secp256k1.SignError!primitives.Address {
+ pub fn address(self: *const Signer) secp256k1.SignError!primitives.Address {
const pubkey = try secp256k1.derivePublicKey(self.private_key);
return secp256k1.pubkeyToAddress(pubkey);
}
- pub fn signHash(self: Signer, message_hash: [32]u8) secp256k1.SignError!Signature {
+ pub fn signHash(self: *const Signer, message_hash: [32]u8) secp256k1.SignError!Signature {
return secp256k1.sign(self.private_key, message_hash);
}
- pub fn signMessage(self: Signer, message: []const u8) secp256k1.SignError!Signature {
+ pub fn signMessage(self: *const Signer, message: []const u8) secp256k1.SignError!Signature {
const prefixed_hash = hashPersonalMessage(message);
return self.signHash(prefixed_hash);
}This improves the cleanup guarantee at the public API level. Downstream functions in secp256k1 may still take the key by value, but the top-level callers should not create unnecessary copies.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/signer.zig` around lines 19 - 22, The methods on Signer (address,
signHash, signMessage) use value receivers (self: Signer) causing the 32-byte
private_key to be copied and left unscrubbable; change their receivers to
pointer receivers (self: *Signer) so callers operate on the original instance
and no stack copies of private_key are produced, keep deinit as pub fn
deinit(self: *Signer) void and update all internal and external call sites to
pass a pointer (or use &signer) and adjust any function signatures that forward
these methods to use *Signer as well so the private key is only ever handled
by-reference.
  pub fn validateHandshakeResponse(response: []const u8, expected_accept: []const u8) bool {
      // Check for HTTP 101 status
-     if (!containsSubstring(response, "101")) return false;
+     if (std.mem.indexOf(u8, response, "101") == null) return false;
Validate the HTTP status line, not any "101" substring.
A response like `HTTP/1.1 401 Unauthorized\r\nX-Debug: 101\r\n...` passes this check even though the upgrade failed. Compare the first status line against `HTTP/1.1 101` or `HTTP/1.0 101` instead of searching the whole buffer.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/ws_transport.zig` around lines 254 - 256, The current
validateHandshakeResponse function incorrectly searches the whole response for
"101"; update validateHandshakeResponse to inspect only the HTTP status line by
locating the first CRLF in response, taking the slice before that as the status
line, and comparing it exactly (or via startsWith) against "HTTP/1.1 101" and
"HTTP/1.0 101" instead of using std.mem.indexOf on "101"; keep the subsequent
Sec-WebSocket-Accept check using expected_accept intact and return false if the
status line does not match the allowed values.
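To make the suggestion concrete, here is one sketch of a status-line-only check; the helper name is hypothetical, and the real `validateHandshakeResponse` would still go on to verify `Sec-WebSocket-Accept`:

```zig
const std = @import("std");

// Hypothetical helper: accept the upgrade only when the first line of the
// response is an HTTP 101 status line, not when "101" merely appears
// anywhere in the buffer.
fn statusLineIs101(response: []const u8) bool {
    // The status line ends at the first CRLF; reject if there is none.
    const line_end = std.mem.indexOf(u8, response, "\r\n") orelse return false;
    const status_line = response[0..line_end];
    return std.mem.startsWith(u8, status_line, "HTTP/1.1 101") or
        std.mem.startsWith(u8, status_line, "HTTP/1.0 101");
}

test "401 response with a 101 header value is rejected" {
    try std.testing.expect(!statusLineIs101("HTTP/1.1 401 Unauthorized\r\nX-Debug: 101\r\n\r\n"));
    try std.testing.expect(statusLineIs101("HTTP/1.1 101 Switching Protocols\r\n\r\n"));
}
```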
- provider.zig: remove len > 0 guards on log.data free (parseHexBytes allocates zero-length slices that must be freed)
- units.zig: add upper bound check in f64ToU256 to prevent UB when float exceeds u128 max; add overflow test
- secp256k1_c.zig: add secp256k1_context_destroy extern; destroy redundant context on cmpxchg failure instead of leaking
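The cmpxchg change in that last bullet follows a standard racy-lazy-init pattern. A sketch under the assumption of a pointer-shaped context; the opaque type, flag value, and function name `getContext` are placeholders, while the two extern names come from the secp256k1 C API:

```zig
const std = @import("std");

const Ctx = opaque {};
// Extern declarations for the C library; the flags argument below is a placeholder.
extern fn secp256k1_context_create(flags: c_uint) *Ctx;
extern fn secp256k1_context_destroy(ctx: *Ctx) void;

var global_ctx = std.atomic.Value(?*Ctx).init(null);

fn getContext() *Ctx {
    if (global_ctx.load(.acquire)) |ctx| return ctx;
    const fresh = secp256k1_context_create(1);
    // If another thread published a context first, cmpxchg fails and returns
    // the winner; destroy our redundant context instead of leaking it.
    if (global_ctx.cmpxchgStrong(null, fresh, .acq_rel, .acquire)) |existing| {
        secp256k1_context_destroy(fresh);
        return existing.?;
    }
    return fresh;
}
```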
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/provider.zig`:
- Around line 700-712: parseSingleLog can allocate log.data and log.topics then
fail (e.g., in parseTopics), leaking memory because parsed_count isn't
incremented until after the call returns; add errdefer guards inside
parseSingleLog to free any partially-allocated resources on error (free log.data
and log.topics via allocator) and only cancel the errdefer on successful
completion, so callers (the loop using parsed_count and logs) don't leak; apply
the same pattern to the other similar parsing call site where
parseSingleLog/parseTopics can fail.
In `@src/utils/units.zig`:
- Around line 18-22: The current boundary check in f64ToU256 allows value ==
MAX_U128_F64 (derived from `@floatFromInt`(std.math.maxInt(u128))) which rounds to
2^128 and causes `@intFromFloat` to panic; change the guard in f64ToU256 to reject
values >= TWO_POW_128_F64 (i.e., use >= instead of > and compare against a
constant TWO_POW_128_F64 representing 2^128 as f64) so the 2^128 boundary is
exclusive and parseEther/parseGwei retain null-on-overflow behavior.
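Concretely, the exclusive boundary this comment asks for might look like the following; the constant and function names are taken from the review, and the exact shape is illustrative:

```zig
const std = @import("std");

// 2^128 as f64 (exactly representable, since it is a power of two).
// @floatFromInt(maxInt(u128)) rounds up to this same value, so the
// comparison must be exclusive (>=), not inclusive (>).
const TWO_POW_128_F64: f64 = 0x1p128;

fn f64ToU256(value: f64) ?u256 {
    if (std.math.isNan(value) or std.math.isInf(value)) return null;
    if (value < 0) return null;
    if (value >= TWO_POW_128_F64) return null;
    const out: u256 = @intFromFloat(value);
    return out;
}

test "boundary and negative inputs return null" {
    try std.testing.expect(f64ToU256(TWO_POW_128_F64) == null);
    try std.testing.expect(f64ToU256(-1.0) == null);
    try std.testing.expect(f64ToU256(1.5e18).? == 1500000000000000000);
}
```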
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 62242654-58a9-4fb1-9fcb-b3e3f8591d5c
📒 Files selected for processing (3)
src/provider.zig, src/secp256k1_c.zig, src/utils/units.zig
- units.zig: use >= TWO_POW_128_F64 boundary (MAX_U128_F64 rounds to 2^128 in f64, causing @intFromFloat panic at the boundary)
- provider.zig: add errdefer guards inside parseSingleLog to free partially-allocated data/topics if subsequent parsing fails
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/provider.zig (2)
699-712: Add malformed-log regression tests for these cleanup paths.

The current tests exercise successful and empty log parsing, but not the new `errdefer` branches. A couple of negative cases using `std.testing.allocator` leak detection (e.g. valid `data` with an invalid topic, and valid topics with an invalid trailing field) would pin down both the inner `parseSingleLog` cleanup and the outer `parsed_count` cleanup.

Also applies to: 723-727, 800-812
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/provider.zig` around lines 699 - 712, Add negative unit tests that trigger the errdefer cleanup paths around parseSingleLog and the outer parsed_count cleanup by using std.testing.allocator leak detection; create cases in tests for provider.zig that feed a valid data field with an invalid topic and valid topics with an invalid trailing field so parseSingleLog returns an error and verify no allocator leaks occur. Target the code paths around parseSingleLog, the local logs array, parsed_count, and the errdefer blocks (the segments shown around lines 699-712, 723-727, and 800-812) so both inner-resource cleanup and the outer loop cleanup are exercised. Ensure tests construct malformed log JSON objects that exercise those specific failure points and assert allocator.free balance after the error is returned.
652-658: Extract the repeated log cleanup into one helper.

The three `errdefer` blocks now encode the same ownership rules. Pulling that into a small `freeLogs` helper would make it much harder to miss a future cleanup fix in one path.

♻️ Possible refactor

+fn freeLogs(allocator: std.mem.Allocator, logs: []const receipt_mod.Log) void {
+    for (logs) |log| {
+        allocator.free(log.data);
+        if (log.topics.len > 0) allocator.free(log.topics);
+    }
+    if (logs.len > 0) allocator.free(logs);
+}
+
-        errdefer {
-            for (logs) |log| {
-                allocator.free(log.data);
-                if (log.topics.len > 0) allocator.free(log.topics);
-            }
-            if (logs.len > 0) allocator.free(logs);
-        }
+        errdefer freeLogs(allocator, logs);

Also applies to: 701-707, 801-807
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/provider.zig` around lines 652 - 658, Extract the repeated cleanup into a single helper named freeLogs(allocator, logs) that iterates the provided logs array, frees each log.data, frees log.topics when log.topics.len > 0, and finally frees the logs array when logs.len > 0; then replace each errdefer block that currently uses allocator, logs, log.data, and log.topics (the three duplicated blocks at the shown locations) with a single call to freeLogs(allocator, logs) so all paths share the same ownership cleanup logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/utils/units.zig`:
- Around line 33-40: The docs for parseEther and parseGwei are incomplete: they
state null is returned for negative or non-finite input but omit that f64ToU256
also returns null for out-of-range/overflow values; update the public comments
on parseEther and parseGwei to explicitly mention that they return null for
negative, non-finite, or out-of-range (overflow) inputs caused by converting
ether*gwei multipliers (ETHER_F64, GWEI_F64) via f64ToU256 so callers aren’t
surprised when large finite values produce null.
---
Nitpick comments:
In `@src/provider.zig`:
- Around line 699-712: Add negative unit tests that trigger the errdefer cleanup
paths around parseSingleLog and the outer parsed_count cleanup by using
std.testing.allocator leak detection; create cases in tests for provider.zig
that feed a valid data field with an invalid topic and valid topics with an
invalid trailing field so parseSingleLog returns an error and verify no
allocator leaks occur. Target the code paths around parseSingleLog, the local
logs array, parsed_count, and the errdefer blocks (the segments shown around
lines 699-712, 723-727, and 800-812) so both inner-resource cleanup and the
outer loop cleanup are exercised. Ensure tests construct malformed log JSON
objects that exercise those specific failure points and assert allocator.free
balance after the error is returned.
- Around line 652-658: Extract the repeated cleanup into a single helper named
freeLogs(allocator, logs) that iterates the provided logs array, frees each
log.data, frees log.topics when log.topics.len > 0, and finally frees the logs
array when logs.len > 0; then replace each errdefer block that currently uses
allocator, logs, log.data, and log.topics (the three duplicated blocks at the
shown locations) with a single call to freeLogs(allocator, logs) so all paths
share the same ownership cleanup logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 36d6500d-d152-4e9e-bbaf-06ec984072b1
📒 Files selected for processing (3)
src/provider.zig, src/utils/units.zig, src/ws_transport.zig
Summary
Comprehensive fix pass addressing 21 issues identified during code review, organized by severity:
Priority Fixes (Security/Critical)
- Add `secureZero` utility with volatile writes; zero private keys, HMAC intermediates, seeds, and nonces across all crypto modules (secp256k1, signer, wallet, hd_wallet, mnemonic)
- Add `Signer.deinit()` and `Wallet.deinit()` methods
- Fix `errdefer` in `abi_decode` to track decoded count on partial failure
- Fix `hd_wallet.toAddress` to return error instead of silently returning zero address
- Add bounds check for >32 tuple values in `abi_encode`
- Replace `@panic` with optional returns in `dex/v2` math functions
- Replace static mutable buffer in `eip712` with allocator-based approach
- Fix `provider.zig` errdefer blocks to free inner allocations (data/topics)
- Replace custom string helpers in `ws_transport` and `subscription` with `std.mem` equivalents (~70 lines removed)
- Add missing modules to `unit_tests.zig`
- Use `@compileError` for unsupported RLP types instead of returning 0

Medium Fixes (Correctness/Quality)
- `abi_json.parseType` returns `null` for unknown types instead of silently falling back to `uint256`
- `multicall` uses `error.InvalidAbiData` instead of `error.OutOfMemory` for malformed data
- `eip155.recoverFromEip155V` returns `?u8` (null on invalid) instead of ambiguous `0`
- `parseEther`/`parseGwei` return `?u256` (null on negative/non-finite input)
- `keccak_xkcp` uses `@panic` on C FFI failures instead of `debug.assert` (elided in release builds)
- Remove dead allocating code path in `abi_encode`; remove unused allocator param
- Widen `v` parameter from `u8` to `u256` in transaction serialization for chain_id > 110
- `erc20`/`erc721` use `freeReturnValues` instead of `allocator.free` to free inner string allocations
- Fix TOCTOU race in `secp256k1_c` global context init with `cmpxchg`
- Make `chain.addressFromHex` comptime-only (compile error on bad hex instead of runtime unreachable)

Test plan
- `zig build` passes
- `zig build test` passes (all existing + new tests)
- `zig build bench` passes with no regressions

Summary by CodeRabbit
New Features
Bug Fixes
Refactor